
    Creative Practice for Classical String Players with Live Looping

    In recent years, string pedagogy discussions have highlighted a growing need for creative practice among classical string players. Since the second half of the nineteenth century, string methods have narrowed to a limited scope of improvisatory techniques, paralleling the decline of improvisation in Western classical music performance practice. This thesis explores live looping as a practice tool to facilitate learning concepts and to help string players develop musicianship skills including improvisation, participate in non-classical genres, and explore their creative voices. Examining the results of string educators who incorporate live looping into their own teaching reveals the tool's effectiveness in bridging curricular standards with opportunities for creativity and open-ended experimentation. Ultimately, live looping can help string players learn a concept more deeply, employing scaffolding techniques to practice abstract models and thus relying less on any specific example such as sheet music. This encourages a broader musical foundation, enabling classical string players to feel more equipped in areas beyond their comfort zones and to participate in and enjoy a wider range of musically fulfilling experiences.

    TADA: Task-Agnostic Dialect Adapters for English

    Large Language Models, the dominant starting point for Natural Language Processing (NLP) applications, fail at a higher rate for speakers of English dialects other than Standard American English (SAE). Prior work addresses this using task-specific data or synthetic data augmentation, both of which require intervention for each dialect and task pair. This poses a scalability issue that prevents the broad adoption of robust dialectal English NLP. We introduce a simple yet effective method for task-agnostic dialect adaptation: aligning non-SAE dialects using adapters and composing them with task-specific adapters trained on SAE. Task-Agnostic Dialect Adapters (TADA) improve dialectal robustness on 4 dialectal variants of the GLUE benchmark without task-specific supervision.
    Comment: 5 pages; ACL Findings paper 202
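
    The abstract describes composing a dialect-alignment adapter with an SAE-trained task adapter. The following is a minimal sketch of that composition idea, not the authors' code; the module names and bottleneck design are illustrative assumptions.

```python
# Hypothetical sketch of TADA-style adapter composition (not the authors' code).
# A dialect adapter aligns non-SAE representations toward SAE space, then a
# task adapter trained only on SAE data is applied on top of it.
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Standard bottleneck adapter: down-project, nonlinearity, up-project, residual."""
    def __init__(self, hidden: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)
        self.act = nn.ReLU()

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return h + self.up(self.act(self.down(h)))  # residual connection

class ComposedAdapters(nn.Module):
    """Stack a task-agnostic dialect adapter under a dialect-agnostic task adapter."""
    def __init__(self, hidden: int):
        super().__init__()
        self.dialect_adapter = BottleneckAdapter(hidden)  # trained once per dialect
        self.task_adapter = BottleneckAdapter(hidden)     # trained once per task, on SAE

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.task_adapter(self.dialect_adapter(h))

# Usage: applied to the activations of a frozen transformer layer.
h = torch.randn(2, 16, 768)            # (batch, seq, hidden)
print(ComposedAdapters(768)(h).shape)  # torch.Size([2, 16, 768])
```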

    Impressions: Understanding Visual Semiotics and Aesthetic Impact

    Is aesthetic impact different from beauty? Is an image's visual salience a reflection of its capacity for effective communication? We present Impressions, a novel dataset through which to investigate the semiotics of images, and how specific visual features and design choices can elicit specific emotions, thoughts, and beliefs. We posit that the impactfulness of an image extends beyond formal definitions of aesthetics to its success as a communicative act, where style contributes as much to meaning formation as the subject matter. However, prior image captioning datasets are not designed to empower state-of-the-art architectures to model potential human impressions or interpretations of images. To fill this gap, we design an annotation task, heavily inspired by image analysis techniques in the Visual Arts, to collect 1,440 image-caption pairs and 4,320 unique annotations exploring impact, pragmatic image description, impressions, and aesthetic design choices. We show that existing multimodal image captioning and conditional generation models struggle to simulate plausible human responses to images. However, this dataset significantly improves their ability to model impressions and aesthetic evaluations of images through fine-tuning and few-shot adaptation.
    Comment: To be published in EMNLP 202

    Properly Learning Decision Trees with Queries Is NP-Hard

    We prove that it is NP-hard to properly PAC learn decision trees with queries, resolving a longstanding open problem in learning theory (Bshouty 1993; Guijarro-Lavin-Raghavan 1999; Mehta-Raghavan 2002; Feldman 2016). While there has been a long line of work, dating back to (Pitt-Valiant 1988), establishing the hardness of properly learning decision trees from random examples, the more challenging setting of query learners necessitates different techniques, and there were no previous lower bounds. En route to our main result, we simplify and strengthen the best known lower bounds for the related problem of Decision Tree Minimization (Zantema-Bodlaender 2000; Sieling 2003). On a technical level, we introduce the notion of hardness distillation, which we study for decision tree complexity but which can be considered for any complexity measure: for a function that requires large decision trees, we give a general method for identifying a small set of inputs that is responsible for its complexity. Our technique even rules out query learners that are allowed constant error. This contrasts with existing lower bounds for the setting of random examples, which only hold for inverse-polynomial error. Our result, taken together with a recent almost-polynomial time query algorithm for properly learning decision trees under the uniform distribution (Blanc-Lange-Qiao-Tan 2022), demonstrates the dramatic impact of distributional assumptions on the problem.
    Comment: 41 pages, 10 figures, FOCS 202

    Multi-VALUE: A Framework for Cross-Dialectal English NLP

    Dialect differences arising from regional, social, and economic factors cause performance discrepancies for many groups of language technology users. Inclusive and equitable language technology must be dialect invariant, meaning that performance remains constant over dialectal shifts. Current systems often fall short of this ideal since they are designed and tested on a single dialect: Standard American English (SAE). We introduce a suite of resources for evaluating and achieving English dialect invariance: Multi-VALUE, a controllable rule-based translation system spanning 50 English dialects and 189 unique linguistic features. Multi-VALUE maps SAE to synthetic forms of each dialect. First, we use this system to stress test question answering, machine translation, and semantic parsing. Stress tests reveal significant performance disparities for leading models on non-standard dialects. Second, we use this system as a data augmentation technique to improve the dialect robustness of existing systems, as sketched below. Finally, we partner with native speakers of Chicano and Indian English to release new gold-standard variants of the popular CoQA task. To execute the transformation code, run model checkpoints, and download both synthetic and gold-standard dialectal benchmark datasets, see http://value-nlp.org.
    Comment: ACL 202
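
    For intuition, here is a toy sketch of rule-based dialect augmentation in the spirit described above. The rules (negative concord, zero copula) are real features of some English dialects, but the function names and regex rules here are illustrative assumptions, not the Multi-VALUE API.

```python
# Toy sketch of rule-based dialect feature transformation as data augmentation.
# Not the Multi-VALUE implementation; rules and names are illustrative only.
import random
import re

def negative_concord(sentence: str) -> str:
    """Toy rule: "doesn't ... any" -> "doesn't ... no"."""
    return re.sub(r"(n't\s+\w+\s+)any\b", r"\1no", sentence)

def zero_copula(sentence: str) -> str:
    """Toy rule: drop the first "is/are" ("she is nice" -> "she nice")."""
    return re.sub(r"\b(is|are)\s+", "", sentence, count=1)

RULES = [negative_concord, zero_copula]

def augment(sentence: str, p: float = 0.5, seed: int = 0) -> str:
    """Apply each dialect feature rule independently with probability p."""
    rng = random.Random(seed)
    for rule in RULES:
        if rng.random() < p:
            sentence = rule(sentence)
    return sentence

print(augment("She is happy and doesn't want any help.", p=1.0))
# -> "She happy and doesn't want no help."
```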

    A Query-Optimal Algorithm for Finding Counterfactuals

    We design an algorithm for finding counterfactuals with strong theoretical guarantees on its performance. For any monotone model $f : X^d \to \{0,1\}$ and instance $x^\star$, our algorithm makes $S(f)^{O(\Delta_f(x^\star))} \cdot \log d$ queries to $f$ and returns an optimal counterfactual for $x^\star$: a nearest instance $x'$ to $x^\star$ for which $f(x') \ne f(x^\star)$. Here $S(f)$ is the sensitivity of $f$, a discrete analogue of the Lipschitz constant, and $\Delta_f(x^\star)$ is the distance from $x^\star$ to its nearest counterfactuals. The previous best known query complexity was $d^{\,O(\Delta_f(x^\star))}$, achievable by brute-force local search. We further prove a lower bound of $S(f)^{\Omega(\Delta_f(x^\star))} + \Omega(\log d)$ on the query complexity of any algorithm, thereby showing that the guarantees of our algorithm are essentially optimal.
    Comment: 22 pages, ICML 202
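
    For reference, the brute-force local search baseline mentioned above can be sketched as follows: search Hamming balls of growing radius around $x^\star$ until the model's value changes, which uses $d^{\,O(\Delta_f(x^\star))}$ queries. This is the baseline, not the paper's algorithm.

```python
# Brute-force local search baseline for nearest counterfactuals over {0,1}^d.
from itertools import combinations

def brute_force_counterfactual(f, x_star):
    """Return a nearest x' (in Hamming distance) with f(x') != f(x_star)."""
    d = len(x_star)
    target = f(x_star)
    for radius in range(1, d + 1):              # grow the search radius
        for coords in combinations(range(d), radius):
            x = list(x_star)
            for i in coords:                    # flip this subset of bits
                x[i] = 1 - x[i]
            if f(tuple(x)) != target:
                return tuple(x)                 # first hit is distance-optimal
    return None                                 # f is constant

# Usage with a tiny monotone model: majority of 3 bits.
maj3 = lambda x: int(sum(x) >= 2)
print(brute_force_counterfactual(maj3, (1, 1, 0)))  # (0, 1, 0): one flip suffices
```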

    A Strong Composition Theorem for Junta Complexity and the Boosting of Property Testers

    We prove a strong composition theorem for junta complexity and show how such theorems can be used to generically boost the performance of property testers. The $\varepsilon$-approximate junta complexity of a function $f$ is the smallest integer $r$ such that $f$ is $\varepsilon$-close to a function that depends only on $r$ variables. A strong composition theorem states that if $f$ has large $\varepsilon$-approximate junta complexity, then $g \circ f$ has even larger $\varepsilon'$-approximate junta complexity, even for $\varepsilon' \gg \varepsilon$. We develop a fairly complete understanding of this behavior, proving that the junta complexity of $g \circ f$ is characterized by that of $f$ along with the multivariate noise sensitivity of $g$. For the important case of symmetric functions $g$, we relate their multivariate noise sensitivity to the simpler and well-studied case of univariate noise sensitivity. We then show how strong composition theorems yield boosting algorithms for property testers: with a strong composition theorem for any class of functions, a large-distance tester for that class is immediately upgraded into one for small distances. Combining our contributions yields a booster for junta testers, and with it new implications for junta testing. This is the first boosting-type result in property testing, and we hope that the connection to composition theorems adds compelling motivation to the study of both topics.
    Comment: 44 pages, 1 figure, FOCS 202
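
    Univariate noise sensitivity, the well-studied quantity referenced above, is standardly defined as $\mathrm{NS}_\delta(g) = \Pr[g(x) \ne g(y)]$ where $x$ is uniform and $y$ flips each bit of $x$ independently with probability $\delta$. Below is a minimal Monte Carlo sketch of this standard definition (not code from the paper).

```python
# Monte Carlo estimate of univariate noise sensitivity of a Boolean function g.
import random

def noise_sensitivity(g, n, delta, trials=100_000, seed=0):
    rng = random.Random(seed)
    disagreements = 0
    for _ in range(trials):
        x = [rng.randint(0, 1) for _ in range(n)]
        y = [b ^ (rng.random() < delta) for b in x]  # flip each bit w.p. delta
        disagreements += g(x) != g(y)
    return disagreements / trials

# Usage: parity is maximally noise sensitive; a dictator g(x) = x[0] is not.
parity = lambda x: sum(x) % 2
print(noise_sensitivity(parity, n=10, delta=0.1))          # ~ (1-(1-2*0.1)**10)/2 = 0.446
print(noise_sensitivity(lambda x: x[0], n=10, delta=0.1))  # ~ 0.1
```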

    Certification with an NP Oracle

    In the certification problem, the algorithm is given a function $f$ with certificate complexity $k$ and an input $x^\star$, and the goal is to find a certificate of size $\le \text{poly}(k)$ for $f$'s value at $x^\star$. This problem is in $\mathsf{NP}^{\mathsf{NP}}$, and assuming $\mathsf{P} \ne \mathsf{NP}$, is not in $\mathsf{P}$. Prior works, dating back to Valiant in 1984, have therefore sought to design efficient algorithms by imposing assumptions on $f$ such as monotonicity. Our first result is a $\mathsf{BPP}^{\mathsf{NP}}$ algorithm for the general problem. The key ingredient is a new notion of the balanced influence of variables, a natural variant of influence that corrects for the bias of the function. Balanced influences can be accurately estimated via uniform generation, and classic $\mathsf{BPP}^{\mathsf{NP}}$ algorithms are known for the latter task. We then consider certification with stricter instance-wise guarantees: for each $x^\star$, find a certificate whose size scales with that of the smallest certificate for $x^\star$. In sharp contrast with our first result, we show that this problem is $\mathsf{NP}^{\mathsf{NP}}$-hard even to approximate. We obtain an optimal inapproximability ratio, adding to a small handful of problems in the higher levels of the polynomial hierarchy for which optimal inapproximability is known. Our proof involves the novel use of bit-fixing dispersers for gap amplification.
    Comment: 25 pages, 2 figures, ITCS 202
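
    For context, the classical influence of a variable is $\mathrm{Inf}_i(f) = \Pr_x[f(x) \ne f(x^{\oplus i})]$, where $x^{\oplus i}$ flips coordinate $i$. The abstract does not define the paper's balanced variant (which corrects for the bias of $f$), so the sketch below estimates only the classical quantity by sampling.

```python
# Monte Carlo estimate of the classical influence Inf_i(f) of variable i.
# The paper's "balanced influence" corrects this for f's bias; its exact
# definition is not given in the abstract, so it is not implemented here.
import random

def influence(f, n, i, trials=100_000, seed=0):
    rng = random.Random(seed)
    count = 0
    for _ in range(trials):
        x = [rng.randint(0, 1) for _ in range(n)]
        y = x.copy()
        y[i] ^= 1                      # flip coordinate i
        count += f(x) != f(y)
    return count / trials

# Usage: for AND on 3 bits, each variable has influence 1/4, since flipping
# bit i changes the output only when both other bits are 1.
and3 = lambda x: int(all(x))
print(influence(and3, n=3, i=0))  # ~ 0.25
```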

    The Expression of irx7 in the Inner Nuclear Layer of Zebrafish Retina Is Essential for a Proper Retinal Development and Lamination.

    Irx7, a member of the zebrafish iroquois transcription factor (TF) family, has been shown to control brain patterning. During retinal development, irx7's expression was found to appear exclusively in the inner nuclear layer (INL) as soon as the prospective INL cells withdraw from the cell cycle and during retinal lamination. In Irx7-deficient retinas, the formation of a proper retinal lamination was disrupted and the differentiation of INL cell types, including amacrine, horizontal, bipolar, and Müller cells, was compromised. Despite irx7's exclusive expression in the INL, photoreceptor differentiation was also compromised in Irx7-deficient retinas. Compared with other retinal cell types, ganglion cells differentiated relatively well in these retinas, except for their dendritic projections into the inner plexiform layer (IPL). In fact, the neuronal projections of amacrine and bipolar cells into the IPL were also diminished. These observations indicate that the lamination defect in Irx7-deficient retinas is likely caused by attenuated neurite outgrowth. Since the expression of known TFs that specify individual retinal cell types was also altered in Irx7-deficient retinas, the irx7 gene network possibly constitutes a novel regulatory circuit for retinal development and lamination.